
    Dutkat: A Privacy-Preserving System for Automatic Catch Documentation and Illegal Activity Detection in the Fishing Industry

    United Nations' Sustainable Development Goal 14 aims to conserve and sustainably use the oceans and their resources for the benefit of people and the planet. This includes protecting marine ecosystems, preventing pollution and overfishing, and increasing scientific understanding of the oceans. Achieving this goal will help ensure the health and well-being of marine life and of the millions of people who rely on the oceans for their livelihoods. To ensure sustainable fishing practices, it is important to have a system in place for automatic catch documentation. This thesis presents our research on the design and development of Dutkat, a privacy-preserving, edge-based system for catch documentation and detection of illegal activities in the fishing industry. Utilising machine learning techniques, Dutkat can analyse large amounts of data and identify patterns that may indicate illegal activities such as overfishing or illegal discarding of catch. Additionally, the system can assist in catch documentation by automating the process of identifying and counting fish species, thus reducing potential human error and increasing efficiency. Specifically, our research has consisted of the development of various components of the Dutkat system, evaluation through experimentation, exploration of existing data, and the organisation of machine learning competitions. We have also implemented Dutkat from a compliance-by-design perspective to ensure that the system complies with data protection laws and regulations such as the GDPR. Our goal with Dutkat is to promote sustainable fishing practices, in line with Sustainable Development Goal 14, while simultaneously protecting the privacy and rights of fishing crews.

    On Optimizing Transaction Fees in Bitcoin using AI: Investigation on Miners Inclusion Pattern

    This is the author's version of the work, posted here for personal use and not for redistribution. The definitive Version of Record was published in ACM Transactions on Internet Technology, https://doi.org/10.1145/3528669. The transaction-rate bottleneck built into popular proof-of-work-based cryptocurrencies, like Bitcoin and Ethereum, leads to fee markets where transactions are included according to a first-price auction for block space. Many attempts have been made to adjust and predict the fee volatility, but even well-formed transactions sometimes experience unexpected delays and evictions unless a substantial fee is offered. In this paper, we propose a novel transaction inclusion model that describes the mechanisms and patterns governing miners' decisions to include individual transactions in the Bitcoin system. Using this model, we devise a Machine Learning (ML) approach to predict transaction inclusion. We evaluate our prediction method using historical observations of the Bitcoin network from a five-month period that includes more than 30 million transactions and 120 million entries. We find that our ML model can predict fee volatility with an accuracy of up to 91%. Our findings enable Bitcoin users to optimise their fee expenses and the approval times for their transactions.

    Fishing Trawler Event Detection: An Important Step Towards Digitization of Sustainable Fishing

    Detection of anomalies within data streams is an important task for societal challenges such as traffic control and fraud detection. To perform anomaly detection, unsupervised analysis of data is a key factor, especially in domains where obtaining labelled data is difficult or where the anomalies to be detected change frequently or are not clearly definable at all. In this article, we present a complete machine-learning-based pipeline for real-time unsupervised anomaly detection that can handle different input data streams simultaneously. We evaluate the usefulness of the proposed method using three well-known datasets (fall detection, crime detection, and sport event detection) and a completely new, unlabelled dataset from the domain of commercial fishing. For all datasets, our method outperforms the baselines significantly and detects relevant anomalies while maintaining a low number of false positives. In addition to its good detection performance, the presented system operates in real time and is flexible and easy to extend.
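The paper's unsupervised pipeline is not specified in the abstract, but the basic idea of flagging deviations in a data stream can be illustrated with a deliberately simple baseline: a rolling z-score detector. This is a generic sketch for illustration only, not the method evaluated in the article; the class name, window size, and threshold are our own choices.

```python
from collections import deque
import math

class RollingZScoreDetector:
    """Minimal streaming anomaly baseline: flag a value whose z-score
    against a sliding window of recent history exceeds a threshold."""

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, value):
        """Return True if `value` is anomalous relative to recent history."""
        anomalous = False
        if len(self.window) >= 10:  # wait for some history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(value - mean) / std > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

# Readings oscillating around 10, then a sudden spike to 25.
det = RollingZScoreDetector(window=50, threshold=3.0)
stream = [10.0 + 0.1 * ((i % 7) - 3) for i in range(100)] + [25.0]
flags = [det.update(v) for v in stream]
```

A real unsupervised detector for multiple simultaneous streams would need to handle concept drift and multivariate inputs, which this toy baseline does not.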

    File System Support for Privacy-Preserving Analysis and Forensics in Low-Bandwidth Edge Environments

    In this paper, we present initial results from our distributed edge systems research in the domain of sustainable harvesting of common good resources in the Arctic Ocean. Specifically, we are developing a digital platform for real-time privacy-preserving sustainability management in the domain of commercial fishery surveillance operations. This is in response to potentially privacy-infringing mandates from some governments to combat overfishing and other sustainability challenges. Our approach is to deploy sensory devices and distributed artificial intelligence algorithms on mobile, offshore fishing vessels and at mainland central control centers. To facilitate this, we need a novel data plane supporting efficient, available, secure, tamper-proof, and compliant data management in this weakly connected offshore environment. We have built our first prototype of Dorvu, a novel distributed file system in this context. Our devised architecture, the design trade-offs among conflicting properties, and our initial experiences are further detailed in this paper.

    H2G-Net: A multi-resolution refinement approach for segmentation of breast cancer region in gigapixel histopathological images

    Over the past decades, histopathological cancer diagnostics has become more complex, and the increasing number of biopsies is a challenge for most pathology laboratories. Thus, development of automatic methods for evaluation of histopathological cancer sections would be of value. In this study, we used 624 whole slide images (WSIs) of breast cancer from a Norwegian cohort. We propose a cascaded convolutional neural network design, called H2G-Net, for segmentation of the breast cancer region from gigapixel histopathological images. The design involves a detection stage using a patch-wise method, and a refinement stage using a convolutional autoencoder. To validate the design, we conducted an ablation study to assess the impact of selected components in the pipeline on tumor segmentation. Guiding segmentation, using hierarchical sampling and deep heatmap refinement, proved to be beneficial when segmenting the histopathological images. We found a significant improvement when using a refinement network for post-processing the generated tumor segmentation heatmaps. The overall best design achieved a Dice similarity coefficient of 0.933±0.069 on an independent test set of 90 WSIs. The design outperformed single-resolution approaches, such as cluster-guided, patch-wise high-resolution classification using MobileNetV2 (0.872±0.092) and a low-resolution U-Net (0.874±0.128). In addition, the design performed consistently on WSIs across all histological grades, and segmentation of a representative ×400 WSI took ~58 s using only the central processing unit. The findings demonstrate the potential of utilizing a refinement network to improve patch-wise predictions. The solution is efficient and does not require overlapping patch inference or ensembling. Furthermore, we showed that deep neural networks can be trained using a random sampling scheme that balances multiple different labels simultaneously, without the need to store patches on disk. Future work should involve more efficient patch generation and sampling, as well as improved clustering.
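The Dice similarity coefficient reported above is a standard overlap measure between a predicted segmentation mask and the ground truth, defined as 2|A∩B| / (|A| + |B|). A minimal reference implementation (the function name and toy masks are ours, for illustration):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.
    Returns a value in [0, 1]; 1.0 means perfect overlap."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: 4 predicted pixels, 4 ground-truth pixels, 3 overlapping,
# so Dice = 2*3 / (4 + 4) = 0.75.
pred = np.zeros((4, 4), dtype=bool); pred[0, :4] = True
gt = np.zeros((4, 4), dtype=bool); gt[0, 1:] = True; gt[1, 0] = True
score = dice_coefficient(pred, gt)
```

The small `eps` term guards against division by zero when both masks are empty; in that degenerate case some implementations instead define the score as 1.0.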

    Njord: a fishing trawler dataset

    Fish is one of the main sources of food worldwide. The commercial fishing industry has many different aspects to consider, ranging from sustainability to reporting. The complexity of the domain also attracts research from fields such as marine biology, fishery sciences, cybernetics, and computer science. In computer science, the detection of fishing vessels via, for example, remote sensing, and the classification of fish from images or videos using machine learning or other analysis methods attract growing attention. Surprisingly little work has been done that considers what is happening on board the fishing vessels. On the deck of the boats, large amounts of data and important information are generated, with potential applications such as automatic detection of accidents or automatic reporting of fish caught. This paper presents Njord, a fishing trawler dataset consisting of surveillance videos from a modern offshore fishing trawler at sea. The main goal of this dataset is to show the potential and possibilities that analysis of such data can provide. In addition to the data, we provide a baseline analysis and discuss several possible research questions this dataset could help answer.

    Predicting Transaction Latency with Deep Learning in Proof-of-Work Blockchains

    Proof-of-work-based cryptocurrencies, like Bitcoin, have a fee market where transactions are included in the blockchain according to a first-price auction for block space. Many attempts have been made to adjust and predict the fee volatility, but even well-formed transactions sometimes experience delays and evictions unless an enormous fee is paid. In this paper, we present a novel machine-learning model, solving a binary classification problem, that can predict transaction fee volatility in the Bitcoin network so that users can optimise their fee expenses and the approval time for their transactions. The model outputs a confidence score indicating whether a new incoming transaction will be included in the next mined block. The model is trained on data from a longitudinal study of the Bitcoin blockchain, containing more than 10 million transactions. New features that we generate include information on how many bytes were already occupied by other transactions in the mempool, assuming they are ordered by fee density in each mining pool. The collected dataset allows us to generate a model for transaction inclusion pattern prediction in the Bitcoin network, hence indicating whether a transaction is well formed or not, according to the previously analysed transactions. With this, we obtain a prediction accuracy of up to 86%.
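The bytes-occupied feature mentioned above can be sketched as follows. This is a simplified illustration under stated assumptions: miners fill blocks greedily by fee density (fee per byte), and the block size limit, function names, and example numbers are ours, not the paper's exact formulation.

```python
def bytes_ahead(mempool, tx_fee_density):
    """Bytes already occupied by mempool transactions whose fee density
    (e.g. satoshis per byte) is at least as high as the new transaction's,
    i.e. the queue ahead of it if blocks are filled greedily by fee density."""
    return sum(size for size, density in mempool if density >= tx_fee_density)

def likely_next_block(mempool, tx_fee_density, block_limit=1_000_000):
    """Crude inclusion heuristic: the transaction fits in the next block
    if the higher-priority bytes leave room under the block size limit."""
    return bytes_ahead(mempool, tx_fee_density) < block_limit

# Hypothetical mempool as (size_bytes, fee_density) pairs.
mempool = [(300_000, 50.0), (500_000, 20.0), (400_000, 5.0)]
print(likely_next_block(mempool, 30.0))  # only 300 kB ahead -> True
print(likely_next_block(mempool, 2.0))   # 1.2 MB ahead -> False
```

The papers' ML models go well beyond this heuristic, learning per-mining-pool inclusion patterns from historical data rather than assuming a single greedy fill rule.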

    Áika: A Distributed Edge System for AI Inference

    Video monitoring and surveillance of commercial fisheries in the world's oceans has been proposed by the governing bodies of several nations as a response to crimes such as overfishing. Traditional video monitoring systems may not be suitable due to limitations of the offshore fishing environment, including low bandwidth, unstable satellite network connections, and the need to preserve the privacy of crew members. In this paper, we present Áika, a robust system for executing distributed Artificial Intelligence (AI) applications on the edge. Áika provides engineers and researchers with several building blocks in the form of Agents, which enable the expression of computation pipelines and distributed applications with robustness and privacy guarantees. Agents are continuously monitored by dedicated monitoring nodes, and applications are provided with a distributed checkpointing and replication scheme. Áika is designed for monitoring and surveillance in privacy-sensitive and unstable offshore environments, where flexible access policies at the storage level can provide privacy guarantees for data transfer and access.
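As a rough illustration of the agent-with-checkpointing idea described above: an agent processes items, periodically snapshots its state, and a replica can resume from that snapshot after a failure. All names and the checkpoint format here are invented for this sketch and are not Áika's actual API.

```python
import time

class Agent:
    """Toy pipeline agent that exposes a checkpoint of its state so a
    replica can resume after failure (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.processed = 0
        self.last_heartbeat = time.monotonic()

    def process(self, item):
        self.processed += 1
        self.last_heartbeat = time.monotonic()  # monitors watch this
        return item  # a real agent would transform or analyse the item

    def checkpoint(self):
        """State a monitoring node could replicate elsewhere."""
        return {"name": self.name, "processed": self.processed}

def restore(state):
    """Rebuild an agent from a checkpoint, as a replica would."""
    agent = Agent(state["name"])
    agent.processed = state["processed"]
    return agent

# Run an agent, checkpoint it, and resume from the checkpoint.
a = Agent("detector")
for frame in range(5):
    a.process(frame)
snapshot = a.checkpoint()
b = restore(snapshot)
```

A production system would persist checkpoints durably and coordinate which replica is active; this sketch only shows the state hand-off.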